We propose an unsupervised mid-level representation for a generative model of scenes. The representation is mid-level in that it is neither per-pixel nor per-image; rather, scenes are modeled as a collection of spatial, depth-ordered "blobs" of features. Blobs are differentiably placed onto a feature grid that is decoded into an image by a generative adversarial network. Due to the spatial uniformity of blobs and the locality inherent to convolution, our network learns to associate different blobs with different entities in a scene and to arrange these blobs to capture scene layout. We demonstrate this emergent behavior by showing that, despite training without any supervision, our method enables applications such as easy manipulation of objects within a scene (e.g., moving, removing, and restyling furniture), creation of plausible scenes given constraints (e.g., a plausible room with drawers at a particular location), and parsing of real-world images into their constituent parts. On a challenging multi-category dataset of indoor scenes, BlobGAN outperforms StyleGAN2 in image quality as measured by FID. For video results and an interactive demo, see our project page: https://www.dave.ml/blobgan
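The blob-compositing idea can be illustrated with a toy sketch: soft blobs with centers, radii, depths, and feature values are alpha-composited back-to-front onto a small feature grid, so nearer blobs occlude farther ones. This is an illustrative stand-in under our own assumptions (the function names, the exponential falloff, and scalar features are not from the paper), not BlobGAN's differentiable splatting.

```python
import math

def blob_alpha(x, y, cx, cy, r):
    # Soft radial falloff: 1 at the blob center, decaying with squared distance.
    return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (r ** 2))

def composite_blobs(blobs, size):
    """Alpha-composite depth-ordered blobs onto a size-by-size feature grid.

    Each blob is (cx, cy, radius, depth, feature). Blobs are drawn
    back-to-front, so nearer blobs occlude farther ones where they overlap.
    """
    grid = [[0.0] * size for _ in range(size)]
    for cx, cy, r, _depth, feat in sorted(blobs, key=lambda b: -b[3]):
        for y in range(size):
            for x in range(size):
                a = blob_alpha(x, y, cx, cy, r)
                grid[y][x] = a * feat + (1 - a) * grid[y][x]
    return grid

blobs = [
    (2.0, 2.0, 1.5, 5.0, 1.0),  # far background blob, feature value 1.0
    (5.0, 5.0, 1.0, 1.0, 2.0),  # near foreground blob, feature value 2.0
]
grid = composite_blobs(blobs, 8)
```

In the full model the composited grid would be decoded into an image by the GAN generator; here it simply holds scalar features.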
Translated by Google Translate
Recent work leverages the expressive power of generative adversarial networks (GANs) to generate labeled synthetic datasets. These dataset generation methods often require new annotations of synthetic images, which forces practitioners to seek out annotators, curate a set of synthetic images, and ensure the quality of generated labels. We introduce the HandsOff framework, a technique capable of producing an unlimited number of synthetic images and corresponding labels after being trained on less than 50 pre-existing labeled images. Our framework avoids the practical drawbacks of prior work by unifying the field of GAN inversion with dataset generation. We generate datasets with rich pixel-wise labels in multiple challenging domains such as faces, cars, full-body human poses, and urban driving scenes. Our method achieves state-of-the-art performance in semantic segmentation, keypoint detection, and depth estimation compared to prior dataset generation approaches and transfer learning baselines. We additionally showcase its ability to address broad challenges in model development which stem from fixed, hand-annotated datasets, such as the long-tail problem in semantic segmentation.
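The GAN-inversion half of the pipeline can be sketched in miniature: given a generator, recover the latent code that reproduces a target by gradient descent on the reconstruction loss. The toy linear generator, loss, and step size below are our own assumptions for illustration, not the HandsOff implementation.

```python
def generate(w, z):
    # Toy linear "generator": image = W @ z, a stand-in for a trained GAN.
    return [sum(w[i][j] * z[j] for j in range(len(z))) for i in range(len(w))]

def invert(w, target, steps=500, lr=0.05):
    """Recover a latent z with generate(w, z) ~= target by gradient descent
    on the reconstruction loss 0.5 * ||W z - x||^2 (gradient: W^T (W z - x))."""
    z = [0.0] * len(w[0])
    for _ in range(steps):
        err = [o - t for o, t in zip(generate(w, z), target)]
        grad = [sum(w[i][j] * err[i] for i in range(len(w)))
                for j in range(len(z))]
        z = [zj - lr * gj for zj, gj in zip(z, grad)]
    return z

W = [[1.0, 0.0], [0.0, 2.0]]   # toy generator weights
x = [3.0, 4.0]                 # "observed image" to invert
z_hat = invert(W, x)           # converges toward [3.0, 2.0]
```

Once labeled images are inverted this way, their latents can be reused to label newly generated images, which is the unification the abstract describes.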
Reflections on glossy objects contain valuable and hidden information about the surrounding environment. By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g. from reflections on the human eye. However, this task is challenging because reflections depend jointly on object geometry, material properties, the 3D environment, and the observer viewing direction. Our approach converts glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective. Our key insight is to convert the object surface into a virtual sensor that captures cast reflections as a 2D projection of the 5D environment radiance field visible to the object. We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings in addition to beyond field-of-view novel-view synthesis, i.e. rendering of novel views that are only directly-visible to the glossy object present in the scene, but not the observer. Moreover, using the radiance field we can image around occluders caused by close-by objects in the scene. Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
Performing 3D dense captioning and visual grounding requires a common and shared understanding of the underlying multimodal relationships. However, despite some previous attempts on connecting these two related tasks with highly task-specific neural modules, it remains understudied how to explicitly depict their shared nature to learn them simultaneously. In this work, we propose UniT3D, a simple yet effective fully unified transformer-based architecture for jointly solving 3D visual grounding and dense captioning. UniT3D enables learning a strong multimodal representation across the two tasks through a supervised joint pre-training scheme with bidirectional and seq-to-seq objectives. With a generic architecture design, UniT3D allows expanding the pre-training scope to more various training sources such as the synthesized data from 2D prior knowledge to benefit 3D vision-language tasks. Extensive experiments and analysis demonstrate that UniT3D obtains significant gains for 3D dense captioning and visual grounding.
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
The selection of an optimal pacing site, which is ideally scar-free and late-activated, is critical to the response to cardiac resynchronization therapy (CRT). Despite the success of current approaches formulating the detection of such late mechanical activation (LMA) regions as a problem of activation time regression, their accuracy remains unsatisfactory, particularly in cases where myocardial scar exists. To address this issue, this paper introduces a multi-task deep learning framework that simultaneously estimates the LMA amount and classifies the scar-free LMA regions based on cine displacement encoding with stimulated echoes (DENSE) magnetic resonance imaging (MRI). With a newly introduced auxiliary LMA region classification sub-network, our proposed model shows more robustness to the complex patterns caused by myocardial scar, significantly reducing their negative effects on LMA detection and in turn improving the performance of scar classification. To evaluate the effectiveness of our method, we test our model on real cardiac MR images and compare the predicted LMA regions against state-of-the-art approaches, showing that our approach achieves substantially higher accuracy. In addition, we employ gradient-weighted class activation mapping (Grad-CAM) to visualize the feature maps learned by all methods. Experimental results suggest that our proposed model better recognizes the LMA region pattern.
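A joint objective of the kind described (regression on the LMA amount plus an auxiliary classification term) can be sketched minimally as a weighted sum of MSE and binary cross-entropy. The weighting, loss choices, and names here are our assumptions, not the paper's exact training loss.

```python
import math

def multi_task_loss(reg_pred, reg_true, cls_prob, cls_true, alpha=0.5):
    """Joint objective: MSE on the regressed LMA amount plus a weighted
    binary cross-entropy for the auxiliary region classification."""
    mse = sum((p - t) ** 2 for p, t in zip(reg_pred, reg_true)) / len(reg_true)
    eps = 1e-7  # numerical floor to keep log() finite
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(cls_prob, cls_true)) / len(cls_true)
    return mse + alpha * bce

good = multi_task_loss([1.0, 2.0], [1.0, 2.0], [0.99, 0.01], [1, 0])
bad = multi_task_loss([1.0, 2.0], [2.0, 3.0], [0.5, 0.5], [1, 0])
```

Training on the sum lets gradients from the auxiliary classifier shape the shared features used by the regressor, which is the mechanism the abstract credits for its robustness to scar.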
Recent research has revealed undesirable biases in NLP data and models. However, these efforts have focused on social disparities in the West and do not directly carry over to other geo-cultural contexts. In this paper, we focus on NLP fairness in the context of India. We begin with a brief account of the prominent axes of social disparity in India. We build resources for fairness evaluation in the Indian context and use them to demonstrate prediction biases along some of these axes. We then delve deeper into social stereotypes around region and religion, demonstrating their prevalence in corpora and models. Finally, we outline a holistic research agenda to re-contextualize NLP fairness research for India, accounting for Indian societal context, and bridging gaps in capability, resources, and techniques adapted to Indian cultural values. While we focus on "India" here, this framework can be re-contextualized for other geo-cultural settings.
Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., video object segmentation (VOS) and multi-object tracking and segmentation (MOTS), but there is little interaction between them due to the use of disjoint benchmark datasets and metrics (e.g., J&F, mAP, sMOTSA). As a result, published works usually target one particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset containing thousands of diverse videos with high-quality object masks, and an associated benchmark comprising six tasks involving object tracking and segmentation in video. All tasks are evaluated on the same data with comparable metrics, which enables researchers to consider them in unison and hence to more effectively pool knowledge from different methods across different tasks. Additionally, we present several baselines for all tasks and demonstrate that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/ali2500/burst-benchmark.
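A common ingredient behind mask-based metrics such as J&F is region similarity (the J measure), i.e. the intersection-over-union of binary masks. A minimal sketch, assuming masks given as lists of 0/1 rows:

```python
def mask_iou(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks,
    each given as a list of rows of 0/1 values."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 1.0

j = mask_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])  # overlap 1, union 2
```

Benchmark metrics like mAP and sMOTSA build further machinery (matching, thresholds) on top of per-mask scores of this kind; the sketch covers only the shared core.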
NLU systems deployed in the real world are expected to be regularly updated by retraining or finetuning the underlying neural network on new training examples that accumulate over time. In our work, we focus on the multilingual setting, where we wish to further finetune a multilingual model on new training data for an NLU task the model has already been trained on. We show that, under certain conditions, naively updating the multilingual model can lead to performance losses on a subset of languages even though the aggregated performance metric shows an improvement. We establish this phenomenon on four tasks belonging to three task families (token-level, sentence-level, and seq2seq), and find that the baselines are far from ideal for the setting at hand. We then build on recent advances in parameter-efficient finetuning to develop novel finetuning pipelines that jointly minimize catastrophic forgetting while encouraging positive cross-lingual transfer, improving gains across languages while reducing losses in this setting.
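One standard way to curb catastrophic forgetting during such updates is a quadratic penalty anchoring each parameter to its pre-update value, weighted by an importance estimate (the EWC-style regularizer sketched below). This is offered only as an illustration of the forgetting/plasticity trade-off; the paper's actual pipelines build on parameter-efficient finetuning, which this toy does not reproduce.

```python
def penalized_loss(task_loss, params, old_params, importance, lam=1.0):
    """New-data task loss plus a quadratic penalty anchoring each parameter
    to its pre-update value, weighted by an importance estimate (as in
    elastic weight consolidation)."""
    penalty = sum(f * (p - o) ** 2
                  for f, p, o in zip(importance, params, old_params))
    return task_loss + lam * penalty

unchanged = penalized_loss(1.0, [1.0, 2.0], [1.0, 2.0], [1.0, 1.0])
drifted = penalized_loss(1.0, [2.0, 2.0], [1.0, 2.0], [1.0, 1.0])
```

Parameters that matter for previously learned languages pay a high price for drifting, while unimportant ones remain free to adapt to the new data.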
Advanced reactors deployed in the coming decades will face deregulated energy markets and may adopt flexible operation to boost profitability. To aid the transition from the baseload to the flexible operation paradigm, autonomous operation is sought. This work focuses on the control aspect of autonomous operation. Specifically, a hierarchical control system is designed to support constraint enforcement during routine operational transients. Within the system, data-driven modeling, physics-based state observation, and classical control algorithms are integrated to provide an adaptable and robust solution. A 320 MW fluoride-salt-cooled high-temperature pebble-bed reactor is the design basis for demonstrating the control system. The hierarchical control system consists of a supervisory layer and a low-level layer. The supervisory layer receives requests to change the system's operating conditions and accepts or rejects them based on the constraints that have been assigned. Constraints are issued to keep the plant within an optimal operating region. The low-level layer interfaces with the system's actuators to fulfill the requested changes while maintaining tracking and regulation duties. To accept requests at the supervisory layer, a reference governor algorithm is employed. To model the reactor's dynamics, a system identification algorithm, dynamic mode decomposition, is used. To estimate the evolution of process variables that cannot be measured directly, an unscented Kalman filter is employed, incorporating a nonlinear model of nuclear dynamics. The composition of these algorithms led to a numerical demonstration of constraint enforcement during a 40% power reduction transient. The adaptability of the proposed system is demonstrated by modifying the constraint values and enforcing them during the transient. Robustness is also demonstrated by enforcing constraints under noisy conditions.
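Dynamic mode decomposition identifies a linear operator mapping each state snapshot to the next. A minimal sketch of the one-dimensional special case, fitting the scalar that best advances the state one step (full DMD solves the same least-squares problem for matrices via an SVD-based pseudoinverse, which this sketch does not attempt):

```python
def fit_linear_dynamics(snapshots):
    """1-D special case of dynamic mode decomposition: the scalar a that
    minimizes sum_k (x[k+1] - a * x[k])^2 over successive snapshots.
    Full DMD solves the matrix analogue via an SVD-based pseudoinverse."""
    x, x_next = snapshots[:-1], snapshots[1:]
    return sum(xn * xk for xn, xk in zip(x_next, x)) / sum(xk * xk for xk in x)

a_hat = fit_linear_dynamics([1.0, 0.5, 0.25, 0.125])  # decaying transient
```

The identified operator can then be iterated forward to predict the plant's response to a requested change, which is what lets the supervisory layer vet requests against constraints before passing them to the low-level layer.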